Towards Scalable MDP Algorithms
Abstract
Since the emergence of Artificial Intelligence as a field, planning under uncertainty has been viewed as one of its crucial subareas. Accordingly, the current lack of scalability in probabilistic planning techniques is a major reason why the grand vision of AI has not yet been fulfilled. Besides being an obstacle to advances in AI, scalability issues also hamper the applicability of probabilistic planning to real-world problems.

A powerful framework for describing probabilistic planning problems is the Markov Decision Process (MDP). Informally, an MDP specifies the objectives the agent is trying to achieve, the actions the agent can perform, and the states in which it can end up while working towards the objective. Solving an MDP means finding a policy, i.e., an assignment of actions to states, that allows the agent to achieve the objective. Optimal solution methods, those that look for the "best" policy according to some criterion, typically try to analyze all possible states or a large fraction of them. Since the state spaces of realistic scenarios can be astronomically large, these algorithms quickly run out of memory. Fortunately, the mathematical structure of some classes of MDPs has enabled the design of more efficient algorithms. This is the case for Stochastic Shortest Path (SSP) problems, whose mathematical properties gave rise to a family of algorithms called Find-and-Revise. When used in combination with a heuristic, the members of this family can find a near-optimal, or even optimal, policy while avoiding the analysis of many states. Nonetheless, the sheer number of states in real-world problems forces algorithms based on state-level analysis to exhaust their capabilities far too early, calling for a fundamentally different approach. Moreover, several expressive and potentially useful MDP classes are not currently known to have a mathematical structure suitable for efficient approximation techniques.

This dissertation advances the state of the art in probabilistic planning in three complementary ways. For SSP MDPs, it derives a novel class of approximate algorithms based on generalizing state-analysis information across many states. This information is stored in the form of automatically generated basis functions, whose number is much smaller than the number of states. As a result, the proposed algorithms have a very compact memory footprint and arrive at high-quality solutions faster than their state-based counterparts. In a parallel effort, the dissertation develops the mathematical apparatus for classes of MDPs that previously had no applicable approximation algorithms. The resulting theory extends the powerful Find-and-Revise paradigm to these MDP classes as well, yielding the first memory-efficient algorithms for solving them. Last but not least, the dissertation will apply the proposed theoretical techniques to a large-scale real-world problem, urban traffic routing being one of the candidate domains.
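To make these notions concrete: in an SSP, the optimal value function V* gives the minimum expected cost to reach a goal from each state and satisfies the Bellman equation V*(s) = min_a Σ_{s'} P(s' | s, a) · [c(s, a, s') + V*(s')], with V*(s) = 0 at goal states. A Find-and-Revise algorithm repeatedly finds a reachable state whose current value violates this equation and revises it with a Bellman backup. The sketch below illustrates the idea with RTDP, one well-known member of the family; it is not the dissertation's algorithm, and the toy problem (states S, A, G and the names SSP, rtdp, etc.) is a hypothetical example.

```python
import random

# A toy SSP in an assumed format: for each state, each action maps to a
# list of (probability, next_state, cost) outcomes; "G" is the goal.
GOAL = {"G"}
SSP = {
    "S": {"safe":  [(1.0, "A", 2.0)],                    # detour: total cost 4
          "risky": [(0.6, "G", 1.0), (0.4, "S", 1.0)]},  # shortcut: expected cost 1/0.6
    "A": {"go":    [(1.0, "G", 2.0)]},
}

def q_value(V, s, a):
    """Expected cost of taking action a in state s and then following V."""
    return sum(p * (c + V.get(s2, 0.0)) for p, s2, c in SSP[s][a])

def bellman_backup(V, s):
    """The 'revise' step: reset V(s) to its one-step lookahead value."""
    best_a = min(SSP[s], key=lambda a: q_value(V, s, a))
    V[s] = q_value(V, s, best_a)
    return best_a

def rtdp(start, trials=1000, seed=0):
    """Trial-based RTDP: simulate greedy trajectories from the start state,
    revising only the states these trajectories actually visit."""
    rng = random.Random(seed)
    V = {}  # unseen states implicitly carry the admissible heuristic h(s) = 0
    for _ in range(trials):
        s = start
        while s not in GOAL:
            a = bellman_backup(V, s)
            r = rng.random()  # sample a successor of (s, a)
            for p, s2, _ in SSP[s][a]:
                r -= p
                if r <= 0:
                    s = s2
                    break
    return V

print(rtdp("S"))  # converges towards V(S) = 1/0.6 ≈ 1.67
```

Note that state A is never expanded in this run: the greedy trajectories stick to the risky shortcut, illustrating how Find-and-Revise methods can avoid analyzing much of the state space. The table V here stores one number per visited state; the dissertation's basis-function approach would instead represent values compactly as a weighted combination V(s) ≈ Σ_i w_i · φ_i(s), with automatically generated basis functions φ_i far fewer in number than the states.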
Similar resources
Intelligent scalable image watermarking robust against progressive DWT-based compression using genetic algorithms
Image watermarking refers to the process of embedding an authentication message, called a watermark, into a host image to uniquely identify ownership. In this paper, a novel, intelligent, scalable, robust wavelet-based watermarking approach is proposed. The proposed approach employs a genetic algorithm to find nearly optimal positions at which to insert the watermark. The embedding positions coded as chr...
An Application of the ABS LX Algorithm to Multiple Sequence Alignment
We present an application of ABS algorithms to multiple sequence alignment (MSA). The Markov decision process (MDP) based model leads to a linear programming problem (LPP), whose solution is linked to a suggested alignment. The important features of our work include the ability to align multiple sequences simultaneously and the absence of a limit on sequence length. Our goal here is to ...
A Coordinated MDP Approach to Multi-Agent Planning for Resource Allocation, with Applications to Healthcare
This paper considers a novel approach to scalable multiagent resource allocation in dynamic settings. We propose an approximate solution in which each resource consumer is represented by an independent MDP-based agent that models expected utility using an average model of its expected access to resources given only limited information about all other agents. A global auction-based mechanism is ...
Towards Practical Theory: Bayesian Optimization and Optimal Exploration
This thesis discusses novel principles to improve the theoretical analyses of a class of methods, aiming to provide theoretically driven yet practically useful methods. The thesis focuses on a class of methods, called bound-based search, which includes several planning algorithms (e.g., the A* algorithm and the UCT algorithm), several optimization methods (e.g., Bayesian optimization and Lipsch...
Expediting RL by Using Graphical Structures (Short Paper)
The goal of reinforcement learning (RL) is to maximize reward (minimize cost) in a Markov decision process (MDP) without knowing the underlying model a priori. RL algorithms tend to be much slower than planning algorithms, which require the model as input. Recent results demonstrate that MDP planning can be expedited by exploiting the graphical structure of the MDP. We present extensions to tw...
Publication date: 2011